An Epistemic Perspective on Agent Awareness

Naumov, Pavel, Pavlova, Alexandra

arXiv.org Artificial Intelligence

The paper proposes to treat agent awareness as a form of knowledge, breaking with the tradition in the existing literature on awareness. It distinguishes the de re and de dicto forms of such knowledge. The work introduces two modalities capturing these forms and formally specifies their meaning using a version of 2D-semantics. The main technical result is a sound and complete logical system describing the interplay between the two proposed modalities and the standard "knowledge of the fact" modality.


Dynamic Logic of Trust-Based Beliefs

Jiang, Junli, Naumov, Pavel, Zhang, Wenxuan

arXiv.org Artificial Intelligence

Traditionally, an agent's beliefs would come from what the agent can see, hear, or sense. In the modern world, beliefs are often based on the data available to the agents. In this work, we investigate a dynamic logic of such beliefs that incorporates public announcements of data. The main technical contribution is a sound and complete axiomatisation of the interplay between data-informed beliefs and data announcement modalities. We also describe a non-trivial polynomial model checking algorithm for this logical system.


The Logic of Doxastic Strategies

Jiang, Junli, Naumov, Pavel

arXiv.org Artificial Intelligence

In many real-world situations, there is often not enough information to know that a certain strategy will succeed in achieving the goal, but there is a good reason to believe that it will. The paper introduces the term "doxastic" for such strategies. The main technical contribution is a sound and complete logical system that describes the interplay between doxastic strategy and belief modalities.


Shhh! The Logic of Clandestine Operations

Naumov, Pavel, Orejola, Oliver

arXiv.org Artificial Intelligence

An operation is called covert if it conceals the identity of the actor; it is called clandestine if the very fact that the operation is conducted is concealed. The paper proposes a formal semantics of clandestine operations and introduces a sound and complete logical system that describes the interplay between the distributed knowledge modality and a modality capturing coalition power to conduct clandestine operations.


Intelligence in Strategic Games

Naumov, Pavel, Yuan, Yuan (Vassar College)

Journal of Artificial Intelligence Research

If an agent, or a coalition of agents, has a strategy, knows that she has a strategy, and knows what the strategy is, then she has a know-how strategy. Several modal logics of coalition power for know-how strategies have been studied before. The contribution of the article is three-fold. First, it proposes a new class of know-how strategies that depend on the intelligence information about the opponents' actions. Second, it shows that the coalition power modality for the proposed new class of strategies cannot be expressed through the standard know-how modality. Third, it gives a sound and complete logical system that describes the interplay between the coalition power modality with intelligence and the distributed knowledge modality in games with imperfect information.


If You're Happy, Then You Know It: The Logic of Happiness... and Sadness

Azimipour, Sanaz, Naumov, Pavel

arXiv.org Artificial Intelligence

To understand and predict human actions, artificial agents must be able to identify, comprehend, and reason about human emotions. Different formal models of human emotions have been studied in the AI literature. Doyle, Shoham, and Wellman propose a logic of relative desire [1]. Lang, Van Der Torre, and Weydert introduce utilitarian desires [2]. Meyer states logical principles aiming at capturing anger and fear [3]. Steunebrink, Dastani, and Meyer expand this work to hope [4]. Adam, Herzig, and Longin propose formal definitions of hope, fear, relief, disappointment, resentment, gloating, pride, shame, admiration, reproach, gratification, remorse, gratitude, and anger [5].


Comprehension and Knowledge

Naumov, Pavel, Ros, Kevin

arXiv.org Artificial Intelligence

The ability of an agent to comprehend a sentence is tightly connected to the agent's prior experiences and background knowledge. The paper suggests interpreting comprehension as a modality and proposes a complete bimodal logical system that describes the interplay between the comprehension and knowledge modalities.


Epistemic Logic of Know-Who

Epstein, Sophia, Naumov, Pavel

arXiv.org Artificial Intelligence

The paper suggests a definition of "know who" as a modality using Grove-Halpern semantics of names. It also introduces a logical system that describes the interplay between modalities "knows who", "knows", and "for all agents". The main technical result is a completeness theorem for the proposed system.


Duty to Warn in Strategic Games

Naumov, Pavel, Tao, Jia

arXiv.org Artificial Intelligence

The paper investigates the second-order blameworthiness or duty to warn modality "one coalition knew how another coalition could have prevented an outcome". The main technical result is a sound and complete logical system that describes the interplay between the distributed knowledge and the duty to warn modalities.


Blameworthiness in Security Games

Naumov, Pavel, Tao, Jia

arXiv.org Artificial Intelligence

Security games are an example of a successful real-world application of game theory. The paper defines blameworthiness of the defender and the attacker in security games using the principle of alternative possibilities and provides a sound and complete logical system for reasoning about blameworthiness in such games.

Introduction. In this paper we study the properties of blameworthiness in security games (von Stackelberg 1934). Security games are used for canine airport patrol (Pita et al. 2008; Jain et al. 2010), airport passenger screening (Brown et al. 2016), protecting endangered animals and fish stocks (Fang, Stone, and Tambe 2015), U.S. Coast Guard port patrol (Sinha et al. 2018; An, Tambe, and Sinha 2016), and randomized deployment of U.S. air marshals (Sinha et al. 2018).

    Defender \ Attacker | Terminal 1 | Terminal 2
    Terminal 1          |     20     |    120
    Terminal 2          |    200     |     16

    Figure 1: Expected Human Losses in Security Game G1.

As an example, consider a security game G1 in which a defender is trying to protect two terminals in an airport from an attacker. Due to limited resources, the defender can patrol only one terminal at a given time. If the defender chooses to patrol Terminal 1 and the attacker chooses to attack Terminal 2, then the human losses at Terminal 2 are estimated at 120, see Figure 1. However, if the defender chooses to patrol Terminal 2 while the attacker still chooses to attack Terminal 2, then the expected human losses at Terminal 2 are only 16, see Figure 1. Generally speaking, the goal of the defender is to minimize human losses, while the goal of the attacker is to maximize them. However, the utility functions in security games usually take into account not only the human losses but also the costs, to the defender and the attacker respectively, of protecting and of attacking the target.
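The reasoning in the example above can be sketched in a few lines of code. This is a minimal illustrative sketch of game G1 only: it uses the expected-loss matrix from Figure 1 and assumes the attacker best-responds to the defender's observed patrol choice, while the defender minimizes worst-case losses; the protection and attack costs mentioned in the abstract are deliberately omitted, and the function names are invented for this sketch.

```python
# Expected human losses from Figure 1, indexed by
# (defender's patrolled terminal, attacker's target).
# Illustrative sketch; protection/attack costs are omitted.
losses = {
    ("T1", "T1"): 20,  ("T1", "T2"): 120,
    ("T2", "T1"): 200, ("T2", "T2"): 16,
}

def attacker_best_response(patrol):
    # The attacker maximizes expected losses given the defender's choice.
    return max(("T1", "T2"), key=lambda target: losses[(patrol, target)])

def defender_minimax():
    # The defender minimizes losses under the attacker's best response.
    return min(("T1", "T2"),
               key=lambda patrol: losses[(patrol, attacker_best_response(patrol))])

print(attacker_best_response("T1"))  # attacking Terminal 2 costs 120 > 20
print(defender_minimax())            # patrolling Terminal 1 caps losses at 120
```

Under these assumptions, patrolling Terminal 1 limits the worst case to 120 lives, whereas patrolling Terminal 2 exposes Terminal 1 to a loss of 200, which is why the defender's minimax choice is Terminal 1.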